
    Spectral and Temporal Interrogation of Cerebral Hemodynamics Via High Speed Laser Speckle Contrast Imaging

    Laser Speckle Contrast Imaging (LSCI) is a non-scanning, wide field-of-view optical imaging technique developed specifically for cerebral blood flow (CBF) monitoring. In this project, a versatile LSCI system was designed and built to monitor CBF changes and examine the physical properties of the cerebral vasculature during functional brain activation experiments. The hardware consists of a high-speed CMOS camera, a coherent light source, a trinocular microscope, and a PC for camera control and data storage. The simplicity of the hardware makes the system well suited to biological experiments. In controlled flow experiments using a custom-made microfluidic channel, the linearity of the CBF estimates was evaluated under high-speed imaging settings. With camera exposure times in the range of tens of microseconds, the results show a linear relationship between the CBF estimates and the flow rates within the microchannel. This validation permits LSCI to be used at high frame rates, with the method limited only by camera speed. In an in vivo experiment, the oxygen fraction of a rat's inspired air was reduced to 12% to induce vessel dilation. The results demonstrated a positive correlation between the system's CBF estimates and the pulse wave velocity derived from aortic blood pressure. To exemplify instantaneous pulsatility studies at high sampling rates, a pulsatile cerebral blood flow analysis was conducted on two vessels, an arteriole and a venule, with waveforms captured at a sampling rate close to 2000 Hz. The arteriole's pulse rises 13 ms earlier than the venule's, and it takes 6 ms longer for the arteriole's pulse to fall below the lower fall-time boundary. Vascular stiffness was evaluated using the second-order derivative (acceleration) of the CBF estimates. The arteriole and the venule have increased-vascular-stiffness indices of 0.95 and 0.74, respectively, and decreased-vascular-stiffness indices of 0.125 and 0.35. Both indices suggest that the arteriole wall is more rigid than the venule wall. The proposed LSCI system can monitor mean flow over a functional activation experiment and interrogate blood flow in terms of physiological oscillations. The proposed vascular stiffness metrics, aimed at detecting preliminary symptoms of stroke, may eventually lead to insights into stroke and its causes.
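    The abstract does not state how the CBF estimates are computed; a minimal sketch of the standard spatial speckle-contrast calculation, assuming a 7x7 sliding window and the common 1/K^2 relative flow index (window size and index choice are assumptions, not taken from this work):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(frame, window=7):
            """Spatial speckle contrast K = sigma / mean over a sliding window."""
            frame = frame.astype(np.float64)
            mean = uniform_filter(frame, size=window)
            mean_sq = uniform_filter(frame ** 2, size=window)
            var = np.clip(mean_sq - mean ** 2, 0.0, None)  # guard against round-off
            return np.sqrt(var) / np.maximum(mean, 1e-12)

        def flow_index(frame, window=7):
            """A common relative CBF estimate: 1 / K^2 (higher = faster flow)."""
            K = speckle_contrast(frame, window)
            return 1.0 / np.maximum(K ** 2, 1e-12)

    Applying flow_index to each raw frame of a high-speed acquisition yields a per-pixel relative flow map per frame, which is what a ~2000 Hz pulsatility analysis like the one described would operate on.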

    TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery

    Temporal graphs are widely used to model dynamic systems with time-varying interactions. In real-world scenarios, the mechanisms that generate future interactions in dynamic systems are typically governed by a set of recurring substructures within the graph, known as temporal motifs. Despite the success and prevalence of current temporal graph neural networks (TGNNs), it remains unclear which temporal motifs a model recognizes as the significant indicators that trigger a given prediction, a critical obstacle to advancing the explainability and trustworthiness of current TGNNs. To address this challenge, we propose a novel approach, the Temporal Motifs Explainer (TempME), which uncovers the most pivotal temporal motifs guiding the predictions of TGNNs. Derived from the information bottleneck principle, TempME extracts the motifs most relevant to an interaction while minimizing the amount of information they contain, preserving the sparsity and succinctness of the explanation. Events in the explanations generated by TempME are verified to be more spatiotemporally correlated than those of existing approaches, providing more understandable insights. Extensive experiments validate the superiority of TempME, with up to an 8.21% increase in explanation accuracy across six real-world datasets and up to a 22.96% increase in the prediction Average Precision of current TGNNs. (Comment: Accepted at NeurIPS 2023, camera-ready version)
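    The abstract invokes the information bottleneck principle without stating the objective. A hedged generic form, writing \(\mathcal{M}\) for the extracted motif set, \(\mathcal{G}\) for the input temporal graph, and \(Y\) for the model's prediction (notation assumed here, not taken from the paper), is

        \max_{\mathcal{M}} \; I(Y; \mathcal{M}) \;-\; \beta \, I(\mathcal{G}; \mathcal{M})

    where \(I(\cdot\,;\cdot)\) denotes mutual information and \(\beta\) trades off predictive relevance against the amount of retained information, which is what yields sparse, succinct explanations.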

    Graph Convolutional Neural Networks for Web-Scale Recommender Systems

    Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable for web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure and node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions, and we design a novel training strategy that relies on harder-and-harder training examples to improve the robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes, representing pins and boards, and 18 billion edges. According to offline metrics, user studies, and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date, and it paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures. (Comment: KDD 2018)
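    The abstract says random walks are used to structure the convolutions but gives no parameters; a minimal sketch of random-walk-based importance neighborhoods in that spirit, with walk count, walk length, and neighborhood size chosen purely for illustration:

        import random
        from collections import Counter

        def importance_neighborhood(adj, node, n_walks=200, walk_len=3, top_k=10):
            """Select the top-k most-visited nodes over short random walks
            from `node`, with normalized visit counts as importance weights.

            adj: dict mapping node -> list of neighbor nodes.
            The returned weighted set would serve as the node's neighborhood
            in the graph convolution, instead of all direct neighbors.
            """
            visits = Counter()
            for _ in range(n_walks):
                cur = node
                for _ in range(walk_len):
                    if not adj.get(cur):
                        break
                    cur = random.choice(adj[cur])
                    visits[cur] += 1
            total = sum(visits.values()) or 1
            return [(n, c / total) for n, c in visits.most_common(top_k) if n != node]

    Sampling a fixed-size, importance-weighted neighborhood in this way keeps the per-node convolution cost bounded regardless of degree, which is one plausible reason the approach scales to billions of edges.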

    The Real Deal: A Review of Challenges and Opportunities in Moving Reinforcement Learning-Based Traffic Signal Control Systems Towards Reality

    Traffic signal control (TSC) is a high-stakes domain that is growing in importance as traffic volume grows globally. An increasing number of works apply reinforcement learning (RL) to TSC; RL can draw on an abundance of traffic data to improve signalling efficiency. However, RL-based signal controllers have never been deployed. In this work, we provide the first review of the challenges that must be addressed before RL can be deployed for TSC. We focus on four challenges: (1) uncertainty in detection, (2) reliability of communications, (3) compliance and interpretability, and (4) heterogeneous road users. We show that the literature on RL-based TSC has made some progress towards addressing each challenge. However, more work should take a systems-thinking approach that considers the impacts of other pipeline components on RL. (Comment: 26 pages; accepted version, with a shortened version published at the 12th International Workshop on Agents in Traffic and Transportation (ATT '22) at IJCAI 2022)

    One-Dimensional Sensor Learns to Sense Three-Dimensional Space

    A sensor system with ultra-high sensitivity, high resolution, rapid response time, and a high signal-to-noise ratio can produce raw data that is exceedingly rich in information, including signals that have the appearance of noise. These noise-like features correlate directly with measurands in orthogonal dimensions and are simply manifestations of the off-diagonal elements of second-order tensors that describe the spatial anisotropy of matter in physical structures and spaces. The use of machine learning techniques to extract useful meaning from the rich information afforded by ultra-sensitive one-dimensional sensors may offer the potential to probe mundane events for novel embedded phenomena. Inspired by our very recent invention of ultra-sensitive optical inclinometers, this work aims to answer a transformative question for the first time: can a single-dimension point sensor with ultra-high sensitivity, fidelity, and signal-to-noise ratio identify an arbitrary mechanical impact event in three-dimensional space? This work is expected to inspire researchers in the fields of sensing and measurement to develop a new generation of powerful sensors or sensor networks with expanded functionality and enhanced intelligence, which may provide rich n-dimensional information and, subsequently, data-driven insights into significant problems.
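    The abstract does not describe the learning pipeline; a heavily hedged sketch of one plausible setup, in which hypothetical time- and frequency-domain features of a single-channel trace are mapped to 3D impact coordinates by a standard regressor (the feature set, sampling rate, and data loader are all assumptions for illustration):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def features(signal, sr=10_000):
            """Hypothetical features from one 1D inclinometer trace:
            simple time-domain statistics plus leading FFT magnitudes."""
            signal = np.asarray(signal, dtype=float)
            spec = np.abs(np.fft.rfft(signal))[:32]
            return np.concatenate([
                [signal.max(), signal.min(), signal.std(),
                 np.argmax(np.abs(signal)) / sr],        # time of peak response
                spec / (spec.max() + 1e-12),             # normalized spectrum
            ])

        # signals, points = load_calibration_impacts()  # assumed data source:
        #                                               # traces + known (x, y, z)
        # X = np.stack([features(s) for s in signals])
        # model = RandomForestRegressor(n_estimators=200).fit(X, points)
        # xyz = model.predict(features(new_signal)[None, :])  # infer 3D location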

    D4Explainer: In-Distribution GNN Explanations via Discrete Denoising Diffusion

    The widespread deployment of Graph Neural Networks (GNNs) has sparked significant interest in their explainability, which plays a vital role in model auditing and in ensuring trustworthy graph learning. The objective of GNN explainability is to discern the underlying graph structures that have the most significant impact on model predictions. Ensuring that the generated explanations are reliable requires consideration of the in-distribution property, particularly given the vulnerability of GNNs to out-of-distribution data. Unfortunately, prevailing explainability methods tend to constrain the generated explanations to the structure of the original graph, downplaying the significance of the in-distribution property and yielding explanations that lack reliability. To address these challenges, we propose D4Explainer, a novel approach that provides in-distribution GNN explanations for both counterfactual and model-level explanation scenarios. D4Explainer incorporates generative graph distribution learning into the optimization objective, which accomplishes two goals: 1) generating a collection of diverse counterfactual graphs that conform to the in-distribution property for a given instance, and 2) identifying the most discriminative graph patterns that contribute to a specific class prediction, thereby serving as model-level explanations. It is worth noting that D4Explainer is the first unified framework to combine both counterfactual and model-level explanations. Empirical evaluations on synthetic and real-world datasets provide compelling evidence of the state-of-the-art performance achieved by D4Explainer in terms of explanation accuracy, faithfulness, diversity, and robustness. (Comment: Accepted at NeurIPS 2023, camera-ready version)
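    The abstract does not detail the discrete denoising diffusion process; a minimal sketch of one common discrete formulation on graphs, in which the forward process independently flips adjacency entries with a noise rate that grows over timesteps (the linear schedule and flip rate are assumptions; the learned component in such methods is the reverse, denoising model, omitted here):

        import numpy as np

        def forward_diffuse(adj, t, T, beta_max=0.5, rng=None):
            """Discrete forward diffusion on an undirected graph: flip each
            possible edge with probability beta_t = beta_max * t / T."""
            if rng is None:
                rng = np.random.default_rng()
            beta_t = beta_max * t / T
            n = adj.shape[0]
            flips = rng.random((n, n)) < beta_t
            flips = np.triu(flips, 1)             # decide each pair once
            flips = flips | flips.T               # mirror for symmetry
            return np.where(flips, 1 - adj, adj)  # toggle flipped entries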

    Generative Explanations for Graph Neural Network: Methods and Evaluations

    Graph Neural Networks (GNNs) achieve state-of-the-art performance in various graph-related tasks. However, their black-box nature often limits their interpretability and trustworthiness. Numerous explainability methods have been proposed to uncover the decision-making logic of GNNs by generating underlying explanatory substructures. In this paper, we conduct a comprehensive review of existing explanation methods for GNNs from the perspective of graph generation. Specifically, we propose a unified optimization objective for generative explanation methods, comprising two sub-objectives: Attribution and Information constraints. We further demonstrate their specific manifestations in various generative model architectures and different explanation scenarios. With this unified objective for the explanation problem, we reveal the shared characteristics and distinctions among current methods, laying the foundation for future methodological advancements. Empirical results demonstrate the advantages and limitations of different explainability approaches in terms of explanation performance, efficiency, and generalizability.
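    The review names the two sub-objectives without writing them out. A hedged generic form, with \(\hat{G}\) denoting the generated explanatory subgraph for an input graph \(G\) (notation assumed here, not taken from the paper), is

        \min_{\hat{G}} \; -\,\mathrm{Attr}(\hat{G}) \;+\; \lambda \, \mathrm{Info}(\hat{G})

    where the attribution term rewards agreement between the model's prediction on \(\hat{G}\) and its prediction on \(G\), the information term penalizes the size or information content of \(\hat{G}\), and \(\lambda\) balances the two constraints.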

    Purpose in the Machine: Do Traffic Simulators Produce Distributionally Equivalent Outcomes for Reinforcement Learning Applications?

    Traffic simulators are used to generate data for learning in intelligent transportation systems (ITSs). A key question is to what extent their modelling assumptions affect the capability of ITSs to adapt to various scenarios when deployed in the real world. This work focuses on two simulators commonly used to train reinforcement learning (RL) agents for traffic applications, CityFlow and SUMO. A controlled virtual experiment varying driver behavior and simulation scale finds evidence against distributional equivalence in RL-relevant measures from these simulators, with the root mean squared error and KL divergence being significantly greater than 0 for all assessed measures. While granular real-world validation generally remains infeasible, these findings suggest that traffic simulators are not a deus ex machina for RL training: understanding the impacts of inter-simulator differences is necessary to train and deploy RL-based ITSs. (Comment: 12 pages; accepted version, published at the 2023 Winter Simulation Conference (WSC '23))
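    The abstract reports RMSE and KL divergence between simulator outputs without giving the estimators; a minimal sketch under the assumption that each simulator yields samples of the same scalar measure (paired runs for RMSE, and a histogram estimate over a shared support for KL):

        import numpy as np

        def rmse(a, b):
            """Root mean squared error between paired samples of a measure."""
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            return float(np.sqrt(np.mean((a - b) ** 2)))

        def kl_divergence(a, b, bins=50, eps=1e-10):
            """Histogram estimate of KL(P_a || P_b) over a shared support."""
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
            p, _ = np.histogram(a, bins=bins, range=(lo, hi))
            q, _ = np.histogram(b, bins=bins, range=(lo, hi))
            p = (p + eps) / (p + eps).sum()  # smooth to avoid log(0)
            q = (q + eps) / (q + eps).sum()
            return float(np.sum(p * np.log(p / q)))

    Values of both statistics significantly above 0 across measures, as the paper reports, indicate that the two simulators do not produce distributionally equivalent outputs.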